Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
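Since the challenge centers on INT8 models for an edge NPU, below is a minimal sketch of the standard route to such models: full-integer post-training quantization with TensorFlow Lite. The `build_sr_model` network and the random calibration data are hypothetical stand-ins (real calibration would use DIV2K low-resolution crops), not any participant's actual solution.

```python
# A minimal sketch of full-integer post-training quantization with
# TensorFlow Lite; `build_sr_model` and the calibration data are
# illustrative placeholders, not a challenge submission.
import numpy as np
import tensorflow as tf

def build_sr_model(scale=3):
    # Tiny illustrative 3x SR net: conv features + depth-to-space upsampling.
    inp = tf.keras.Input(shape=(360, 640, 3))
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(3 * scale * scale, 3, padding="same")(x)
    out = tf.nn.depth_to_space(x, scale)           # -> 1080x1920x3 (Full HD)
    return tf.keras.Model(inp, out)

def representative_dataset():
    # Calibration samples determine the INT8 scales and zero-points.
    for _ in range(100):
        yield [np.random.rand(1, 360, 640, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(build_sr_model())
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to INT8 kernels so the whole graph can run on the edge NPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
with open("sr_int8.tflite", "wb") as f:
    f.write(converter.convert())
```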
Medical vision-and-language pre-training provides a feasible solution for extracting effective visual and linguistic representations from medical images and texts. However, few studies have been dedicated to this field to facilitate medical vision-and-language understanding. In this paper, we propose a self-supervised learning paradigm with multi-modal masked autoencoders (M$^3$AE), which learns cross-modal domain knowledge by reconstructing missing pixels and tokens from randomly masked images and texts. There are three key designs that make this simple approach work. First, considering the different information densities of vision and language, we adopt different masking ratios for the input images and texts, with a larger masking ratio used for images. Second, we use visual and textual features from different layers to perform the reconstruction, to deal with the different levels of abstraction in vision and language. Third, we develop different designs for the vision and language decoders (i.e., a Transformer for vision and a multi-layer perceptron for language). To perform a comprehensive evaluation and facilitate further research, we construct a medical vision-and-language benchmark comprising three tasks. Experimental results demonstrate the effectiveness of our approach, which achieves state-of-the-art results on all downstream tasks. In addition, we conduct further analyses to better verify the different components of our approach and the various settings of pre-training. The source code is available at \url{https://github.com/zhjohnchan/m3ae}.
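As a concrete illustration of the first design choice, here is a minimal sketch, not the authors' code, of applying a larger random masking ratio to image patches than to text tokens; all names and shapes are hypothetical.

```python
# Hypothetical sketch of modality-specific masking ratios in the spirit of
# M^3AE: images are masked more aggressively than text.
import torch

def random_mask(tokens: torch.Tensor, mask_ratio: float):
    """tokens: (batch, seq_len, dim). Returns kept tokens and a 0/1 mask."""
    b, n, d = tokens.shape
    n_keep = max(1, int(n * (1.0 - mask_ratio)))
    noise = torch.rand(b, n, device=tokens.device)   # per-token random scores
    ids_shuffle = noise.argsort(dim=1)               # random permutation
    ids_keep = ids_shuffle[:, :n_keep]
    kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n, device=tokens.device)    # 1 = masked (to reconstruct)
    mask.scatter_(1, ids_keep, 0.0)
    return kept, mask

image_patches = torch.randn(4, 196, 768)   # e.g. 14x14 ViT patches
text_tokens   = torch.randn(4, 40, 768)
img_kept, img_mask = random_mask(image_patches, mask_ratio=0.75)  # high for images
txt_kept, txt_mask = random_mask(text_tokens,   mask_ratio=0.15)  # low for text
```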
Deep learning-based single image super-resolution (SISR) approaches have drawn much attention and achieved remarkable success on modern advanced GPUs. However, most state-of-the-art methods require huge numbers of parameters, memory, and computational resources, and usually show poor inference times when deployed on current mobile device CPUs/NPUs. In this paper, we propose a simple plain convolution network with a fast nearest convolution module (NCNet), which is NPU-friendly and can perform reliable super-resolution in real time. The proposed nearest convolution has the same performance as nearest-neighbor upsampling but is much faster and better suited to the Android NNAPI. Our model can be easily deployed on mobile devices with 8-bit quantization and is fully compatible with all major mobile AI accelerators. Moreover, we conduct comprehensive experiments on different tensor operations on a mobile device to illustrate the efficiency of our network architecture. Our NCNet is trained and validated on the DIV2K 3x dataset, and comparisons with other efficient SR methods show that NCNet achieves high-fidelity SR results while taking less inference time. Our code and pretrained models are publicly available at \url{https://github.com/algolzw/ncnet}.
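The nearest convolution trick can be made concrete: nearest-neighbor upsampling is exactly reproducible as a fixed 1x1 convolution that repeats each channel s^2 times, followed by a pixel shuffle (depth-to-space), two ops that map well onto NPUs. The sketch below is an assumed reconstruction of the idea, not the released NCNet code.

```python
# Assumed reconstruction of the "nearest convolution" idea, not the
# released NCNet code: nearest-neighbor x3 upsampling as a fixed 1x1
# convolution plus a pixel shuffle.
import torch
import torch.nn.functional as F

def nearest_conv(x: torch.Tensor, scale: int = 3) -> torch.Tensor:
    c = x.shape[1]
    # (c*s^2, c, 1, 1) weight: output channel k copies input channel k // s^2.
    weight = torch.eye(c, device=x.device).view(c, c, 1, 1)
    weight = weight.repeat_interleave(scale * scale, dim=0)
    x = F.conv2d(x, weight)            # (b, c*s^2, h, w), runs as a plain conv
    return F.pixel_shuffle(x, scale)   # (b, c, h*s, w*s), a depth-to-space op

x = torch.randn(1, 3, 8, 8)
assert torch.equal(nearest_conv(x),
                   F.interpolate(x, scale_factor=3, mode="nearest"))
```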
Tissue segmentation is a mainstay of pathological examination, whereas manual delineation is overly burdensome. To assist this time-consuming and subjective manual step, researchers have devised methods to automatically segment structures in pathological images. Recently, automated machine learning and deep learning-based methods have dominated tissue segmentation research. However, most machine and deep learning-based approaches are supervised and developed with large numbers of training samples, for which pixel-wise annotations are expensive and sometimes unobtainable. This paper introduces a novel unsupervised learning paradigm by integrating an end-to-end deep mixture model with a constrained indicator to acquire accurate semantic tissue segmentation. The constraint aims to centralize the components of the deep mixture model during the computation of the optimization function. In doing so, the redundant or empty class issue common in current unsupervised learning methods can be greatly reduced. Through validation on public and in-house datasets, the proposed deep constrained Gaussian network achieves significantly better performance (Wilcoxon signed-rank test) on tissue segmentation, with average Dice scores of 0.737 and 0.735 respectively, compared with other existing unsupervised segmentation approaches. Furthermore, the proposed method presents similar performance (p-value > 0.05) compared with a fully supervised U-Net.
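To make the mixture-model view concrete, here is a minimal sketch in the spirit of the approach, not the authors' implementation: fit a Gaussian mixture over per-pixel deep features and take the component assignment as an unsupervised label map. The feature tensor is a random stand-in.

```python
# Minimal sketch of unsupervised segmentation via a Gaussian mixture over
# per-pixel features; random stand-ins, not the deep constrained Gaussian
# network itself.
import numpy as np
from sklearn.mixture import GaussianMixture

h, w, d = 128, 128, 16
features = np.random.rand(h, w, d)          # stand-in for CNN feature maps

gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(features.reshape(-1, d)).reshape(h, w)

# The paper's constraint concentrates mixture components during optimization
# to avoid redundant/empty classes; a crude analogue here is to inspect
# components whose mixing weight falls below a threshold.
active = gmm.weights_ > 0.05
print("active components:", active.sum(), "of", len(active))
```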
Deep implicit functions have shown remarkable shape modeling ability in various 3D computer vision tasks. One drawback is that it is hard for them to represent a 3D shape as multiple parts. Current solutions learn various primitives and blend the primitives directly in the spatial space, which still struggles to approximate the 3D shape accurately. To resolve this problem, we introduce a novel implicit representation that represents a single 3D shape as a set of parts in the latent space, for highly accurate and interpretable shape modeling. Our insight here is that both part learning and part blending are much easier in the latent space than in the spatial space. We name our method Latent Partition Implicit (LPI), because it casts global shape modeling into multiple local part modeling, partitioning the global shape unity. LPI represents a shape as a signed distance function (SDF) using surface codes. Each surface code is a latent code representing a part whose center lies on the surface, which enables us to flexibly employ intrinsic attributes of shapes or additional surface properties. Eventually, LPI can reconstruct both the shape and the parts on the shape, both of which are plausible meshes. LPI is a multi-level representation that can partition a shape into different numbers of parts after training. LPI can be learned without ground-truth signed distances, point normals, or any supervision of the part partition. LPI outperforms the latest methods on widely used benchmarks in terms of reconstruction accuracy and modeling interpretability. Our code, data, and models are available at https://github.com/chenchao15/lpi.
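A minimal sketch of what part-level latent codes centered on the surface might look like; this is an assumed reconstruction for illustration, not the LPI architecture: each query point is decoded against every part code in the latent space, and part predictions are fused with distance-based weights.

```python
# Hypothetical sketch of a part-based SDF decoder; names, fusion rule, and
# shapes are assumptions, not the LPI implementation.
import torch
import torch.nn as nn

class PartSDFDecoder(nn.Module):
    def __init__(self, code_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, queries, codes, centers):
        # queries: (q, 3); codes: (p, code_dim); centers: (p, 3) on the surface
        q = queries.shape[0]
        local = queries[:, None, :] - centers[None, :, :]        # (q, p, 3)
        inp = torch.cat([codes[None].expand(q, -1, -1), local], dim=-1)
        sdf_parts = self.net(inp).squeeze(-1)                    # (q, p)
        # Softmin-style fusion in latent space: nearer part centers dominate.
        w = torch.softmax(-local.norm(dim=-1), dim=-1)           # (q, p)
        return (w * sdf_parts).sum(dim=-1)                       # (q,)

decoder = PartSDFDecoder()
sdf = decoder(torch.randn(256, 3), torch.randn(8, 64), torch.randn(8, 3))
```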
Reading comprehension is a complex cognitive process involving many human brain activities. Plenty of works have studied reading patterns and attention allocation in information retrieval-related scenarios. However, little is known about what happens in the human brain during reading comprehension and how these cognitive activities affect the information retrieval process. Additionally, with advances in brain imaging techniques such as electroencephalography (EEG), brain signals can be collected almost in real time, raising the question of whether they can be used as feedback to facilitate information acquisition performance. In this paper, we carefully design a lab-based user study to investigate brain activities during reading comprehension. Our findings show that neural responses vary with different types of reading contents, i.e., contents that can satisfy users' information needs and contents that cannot. We suggest that various cognitive activities, e.g., cognitive load, semantic-thematic understanding, and inferential processing, underpin these neural responses at the micro-time scale during reading comprehension. From these findings, we illustrate several insights for information retrieval tasks, such as ranking model construction and interface design. Moreover, we suggest the possibility of detecting reading comprehension status for proactive real-world systems. To this end, we propose a Unified framework for EEG-based Reading Comprehension Modeling (UERCM). To verify its effectiveness, we conduct extensive experiments based on EEG features for two reading comprehension tasks: answer sentence classification and answer extraction. The results indicate that it is feasible to improve the performance of both tasks with brain signals.
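For the answer sentence classification task, a baseline pipeline might look like the sketch below: per-epoch EEG band-power features fed to a linear classifier under cross-validation. The shapes and random data are assumptions; this is not the UERCM model.

```python
# Assumed baseline for EEG-feature-based answer sentence classification;
# random data and shapes are placeholders, not the authors' setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_bands = 600, 32, 5     # e.g. delta..gamma band power
X = rng.normal(size=(n_epochs, n_channels * n_bands))
y = rng.integers(0, 2, size=n_epochs)          # 1 = sentence contains answer

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```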
Making sense of multiple modalities can yield a more comprehensive description of real-world phenomena. However, learning the co-representation of diverse modalities is still a long-standing endeavor in emerging machine learning applications and research. Previous generative approaches for multimodal input approximate the joint-modality posterior with uni-modality posteriors, combined as a product-of-experts (PoE) or a mixture-of-experts (MoE). We argue that these approximations lead to a defective bound for the optimization process and a loss of semantic connection among modalities. This paper presents a novel variational method on sets, called the Set Multimodal VAE (SMVAE), for learning a multimodal latent space while handling the missing-modality problem. By modeling the joint-modality posterior distribution directly, the proposed SMVAE learns to exchange information between multiple modalities and compensates for the drawbacks caused by factorization. On public datasets from various domains, the experimental results demonstrate that the proposed method supports order-agnostic cross-modal generation while achieving outstanding performance compared to state-of-the-art multimodal methods. The source code for our method is available online at https://anonymous.4open.science/r/SMVAE-9B3C/.
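One way to picture modeling the joint-modality posterior directly, rather than factorizing it as PoE/MoE, is a permutation-invariant set encoder over whichever modalities are present; the sketch below is a hypothetical illustration, not the SMVAE release.

```python
# Hypothetical set-style joint encoder: present modalities are embedded,
# pooled order-agnostically, and mapped to Gaussian posterior parameters.
# Not the SMVAE implementation.
import torch
import torch.nn as nn

class SetJointEncoder(nn.Module):
    def __init__(self, mod_dims, embed=64, latent=32):
        super().__init__()
        self.embed = nn.ModuleList([nn.Linear(d, embed) for d in mod_dims])
        self.mu = nn.Linear(embed, latent)
        self.logvar = nn.Linear(embed, latent)

    def forward(self, inputs):
        # inputs: dict {modality_index: (batch, dim)}; missing ones are absent,
        # so any subset of modalities can be encoded with the same network.
        pooled = torch.stack(
            [torch.relu(self.embed[i](x)) for i, x in inputs.items()], dim=0
        ).mean(dim=0)                              # order-agnostic pooling
        return self.mu(pooled), self.logvar(pooled)

enc = SetJointEncoder(mod_dims=[784, 10])
mu, logvar = enc({0: torch.randn(8, 784)})         # image-only subset
mu2, logvar2 = enc({0: torch.randn(8, 784), 1: torch.randn(8, 10)})
z = mu2 + torch.exp(0.5 * logvar2) * torch.randn_like(mu2)  # reparameterization
```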
Given that rich information is hidden behind ubiquitous numbers in text, numerical reasoning over text should be an essential skill of AI systems. To derive precise equations for solving numerical reasoning problems, previous work has focused on modeling the structure of equations and proposed various structured decoders. Though structure modeling has proven effective, these structured decoders construct a single equation in a pre-defined autoregressive order, potentially placing an unnecessary restriction on how a model should grasp the reasoning process. Intuitively, humans may have numerous pieces of thoughts popping up in no pre-defined order; thoughts are not limited to the problem at hand, and can even concern other related problems. By comparing diverse thoughts and chaining relevant pieces, humans are less prone to errors. In this paper, we take this inspiration and propose CANTOR, a numerical reasoner that models reasoning steps using a directed acyclic graph, producing diverse reasoning steps simultaneously without pre-defined decoding dependencies, then comparing and chaining relevant ones to reach a solution. Extensive experiments demonstrate the effectiveness of CANTOR under both fully-supervised and weakly-supervised settings.
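The DAG formulation can be illustrated with a toy evaluator: candidate steps reference problem numbers or earlier steps rather than following one autoregressive order, and executing the selected steps topologically yields the answer. This is illustrative only, not the CANTOR decoder.

```python
# Toy DAG evaluator for numerical reasoning; an illustration of the
# formulation, not the CANTOR model.
import operator

ops = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def eval_dag(inputs, steps, answer_node):
    """inputs: dict name -> number; steps: list of (name, op, arg1, arg2)
    given in a valid topological order; args name inputs or earlier steps."""
    values = dict(inputs)
    for name, op, a, b in steps:
        values[name] = ops[op](values[a], values[b])
    return values[answer_node]

# "A store sold 12 boxes at $5 each and gave a $9 discount" -> 12*5 - 9
result = eval_dag(
    {"n0": 12, "n1": 5, "n2": 9},
    [("s0", "*", "n0", "n1"),   # diverse steps can be produced in parallel;
     ("s1", "-", "s0", "n2")],  # relevant ones are chained to the answer
    answer_node="s1",
)
print(result)  # 51
```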
Prompt learning has recently become an effective linguistic tool for eliciting PLMs' knowledge on few-shot tasks. However, studies have shown that prompt learning still lacks robustness, since suitable initialization of continuous prompts and expert-crafted manual prompts are essential to the fine-tuning process. Moreover, humans also use their comparative abilities to draw on existing knowledge when distinguishing between different examples. Motivated by this, we explore how to use contrastive samples to strengthen prompt learning. In detail, we first propose our model, ConsPrompt, which combines a prompt encoding network, a contrastive sampling module, and a contrastive scoring module. Subsequently, two sampling strategies, similarity-based and label-based, are introduced to realize differential contrastive learning. The effectiveness of the proposed ConsPrompt is demonstrated on five different few-shot learning tasks, and the results show that the similarity-based sampling strategy is more effective than the label-based one for incorporating contrastive learning. Our results also exhibit state-of-the-art performance and robustness in different few-shot settings, which suggests that ConsPrompt can serve as a better knowledge probe for motivating PLMs.
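A minimal sketch of what similarity-based contrastive sampling could look like (hypothetical names, not the ConsPrompt code): for each anchor, the most similar same-label example serves as the positive, and an InfoNCE-style loss contrasts it against different-label examples.

```python
# Hypothetical illustration of similarity-based contrastive sampling over
# prompt-encoded representations; not the ConsPrompt implementation.
import torch
import torch.nn.functional as F

def contrastive_prompt_loss(reps, labels, tau=0.1):
    # reps: (n, d) prompt-encoded representations; labels: (n,)
    reps = F.normalize(reps, dim=-1)
    sim = reps @ reps.t() / tau                   # pairwise similarities
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    same = labels[:, None] == labels[None, :]
    # Similarity-based sampling: the most similar same-label example
    # becomes the positive for each anchor.
    pos_idx = sim.masked_fill(~same, float("-inf")).argmax(dim=1)
    pos_onehot = F.one_hot(pos_idx, num_classes=len(labels)).bool()
    # InfoNCE: positive contrasted against all different-label examples.
    logits = sim.masked_fill(same & ~pos_onehot, float("-inf"))
    return F.cross_entropy(logits, pos_idx)

reps = torch.randn(16, 128)
labels = torch.arange(16) % 4                     # four classes, four shots each
loss = contrastive_prompt_loss(reps, labels)
```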
Multi-intent detection and slot filling joint models are gaining increasing traction since they are closer to complicated real-world scenarios. However, existing approaches (1) focus on identifying implicit correlations between utterances and one-hot encoded labels in both tasks while ignoring explicit label characteristics; and (2) directly incorporate multi-intent information for each token, which can lead to incorrect slot predictions due to the introduction of irrelevant intents. In this paper, we propose a framework termed DGIF, which first leverages the semantic information of labels to give the model additional signals and enriched priors. Then, a multi-grain interactive graph is constructed to model the correlations between intents and slots. Specifically, we propose a novel approach to constructing the interactive graph based on the injection of label semantics, which can automatically update the graph to better alleviate error propagation. Experimental results show that our framework significantly outperforms existing approaches, obtaining a relative improvement of 13.7% over the previous best model on the MixATIS dataset in overall accuracy.
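The label-semantics injection can be sketched as follows, under assumed shapes and not as the DGIF implementation: label names are encoded like the utterance, and token-label attention contributes a label-aware signal to each token before slot prediction.

```python
# Hypothetical sketch of injecting label semantics via token-label attention;
# shapes, dimensions, and the fusion rule are assumptions, not DGIF.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 64
tokens = torch.randn(1, 12, d)                 # encoded utterance tokens
label_emb = torch.randn(7, d)                  # encoded label-name texts

attn = torch.softmax(tokens @ label_emb.t() / d ** 0.5, dim=-1)  # (1, 12, 7)
label_aware = attn @ label_emb                 # label-semantic signal per token
fused = torch.cat([tokens, label_aware], dim=-1)  # enriched token features

slot_head = nn.Linear(2 * d, 7)
slot_logits = slot_head(fused)                 # (1, 12, 7) slot predictions
```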